Steinar H. Gunderson: Solskogen 2016 videos
I just published the videos from Solskogen 2016
on Youtube; you can find them all in this
playlist.
They are basically exactly what was sent out on the live stream, frame
for frame, except that the audio for the live shader compos has been
remastered, and of course a lot of dead time has been cut out (the stream
ran over several days, but most of the time, it was only showing the
information loop from the bigscreen).
As far as I can tell, YouTube doesn't really support the variable 50/60 Hz
frame rate we've been using, but mostly it seems to do some 60 Hz
upconversion, which is okay enough, because the rest of your setup
most likely isn't free-framerate anyway.
Solskogen is interesting in that we're trying to do a high-quality stream
with essentially zero money allocated to it; where something like Debconf
can use 2500 for renting and transporting equipment (granted, for two
or three rooms and not our single stream), we're largely dependent on
personal equipment as well as borrowing things here and there. (I think
we borrowed stuff from more or less ten distinct places.) Furthermore,
we're nowhere near the simple situation of two cameras, a laptop, and
perhaps a few microphones; not only do you need to run full 1080p60 to the
bigscreen and switch between that and information slides for each production,
but an Amiga 500 doesn't really have an HDMI port, and the Commodore 64
delivers an infamously broken 50.12 Hz signal that you really need to handle
carefully if you don't want it to look like crap.
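To put a number on that brokenness, here's a quick back-of-envelope sketch (my own arithmetic, not from any spec) of how fast a 50.12 Hz source drifts against a true 50 Hz clock if you capture it naively:

```python
# Back-of-envelope: how quickly a 50.12 Hz C64 signal drifts against
# a fixed 50 Hz output clock if you capture it frame-for-frame.
src_hz, out_hz = 50.12, 50.0
surplus_frames_per_sec = src_hz - out_hz       # 0.12 extra frames per second
secs_per_dropped_frame = 1 / surplus_frames_per_sec
print(round(secs_per_dropped_frame, 2))        # -> 8.33
```

In other words, a naive capture has to drop (or otherwise absorb) a frame roughly every eight seconds, which is exactly the kind of stutter you need to deal with carefully.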
These two factors together lead to a rather eclectic setup; here, visualized
beautifully from my ASCII art by
ditaa:
Of course, for me, the really interesting part here is near the end of the
chain, with Nageru, my live video mixer, doing
the stream mixing and encoding. (There's also
Cubemap, the video reflector, but honestly,
I never worry about that anymore. Serving 150 simultaneous clients is just
not something to write home about; the only adjustment I would want
to make would probably be some WebSockets support to be able to deal with iOS
without having to use a secondary HLS stream.) Of course, to make things
even more complicated, the live shader compo needs two different inputs
(the two coders' laptops) live on the bigscreen, which was done with two
video capture cards, text chroma-keyed on top from Chroma, and
OBS, because the guy
controlling the bigscreen has different preferences from me. I would take
his screen in as a dirty feed and then put my own stuff around it, like
this:
(Unfortunately, I forgot to take a screenshot of Nageru itself during this
run.)
Solskogen was the first time I'd really used Nageru in production, and
despite super-extensive testing, there's always something that can go wrong.
And indeed there was: First of all, we discovered that the local Internet
line had been reduced from 30/10 to 5/0.5 (which is, frankly, unusable for
streaming video), and after we'd halfway fixed that (we got it to
25/4 or so by prodding the ISP, of which we could reserve about 2 Mbit/sec
for video; demoscene content is really hard to encode, so I'd have preferred
a lot more), Nageru started crashing.
They weren't even crashes I could make much sense of. Generally it seemed like the
NVIDIA drivers were returning GL_OUT_OF_MEMORY on things like creating
mipmaps; it's logical that they'd be allocating memory, but we had 6 GB of
GPU memory and 16 GB of CPU memory, and lots of it was free. (The PC we used for encoding was much,
much faster than what you need to run Nageru smoothly, so we had plenty of
CPU power left to run x264 in, although you can of course always want more.)
It seemed to be mostly related to zoom transitions, so I generally avoided
those and ran that night's compos in a more static fashion.
It wasn't until later that night (or morning, if you will) that I actually
understood the bug (through the godsend of the
NVX_gpu_memory_info
extension, which gave me enough information about the GPU memory state that I
understood I wasn't leaking GPU memory at all); I had set Nageru to lock all of its memory used in RAM,
so that it would never ever get swapped out and lose frames for that reason.
I had set the limit for lockable RAM based on my test setup, with 4 GB of RAM, but this
setup had much more RAM, a 1080p60 input (which uses more RAM, of course)
and a second camera, all of which I hadn't been able to test before, since I
simply didn't have the hardware available. So I wasn't hitting the available
RAM, but I was hitting the amount of RAM that Linux was willing to lock into
memory for me, and at that point, it would rather return errors on memory
allocations (including the allocations the driver needed to make for its
texture memory backings) than violate the never-swap contract.
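The failure mode is easy to reproduce outside Nageru. Here's a small sketch of the kernel refusing to lock memory beyond RLIMIT_MEMLOCK; the 1 MB limit is a made-up number for demonstration, and note that a process with CAP_IPC_LOCK (e.g. root) bypasses the limit entirely:

```python
import ctypes
import ctypes.util
import mmap
import resource

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.mlock.argtypes = [ctypes.c_void_p, ctypes.c_size_t]

# Artificially low lockable-memory limit (1 MB); purely for demonstration.
resource.setrlimit(resource.RLIMIT_MEMLOCK, (1 << 20, 1 << 20))

def try_lock(nbytes):
    # Map an anonymous buffer and ask the kernel to pin it in RAM.
    buf = mmap.mmap(-1, nbytes)
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    return libc.mlock(addr, nbytes) == 0

print("64 kB:", "locked" if try_lock(64 << 10) else "refused")
# 4 MB exceeds the 1 MB limit, so an unprivileged process gets ENOMEM here,
# even if plenty of physical RAM is free -- the same class of error the
# NVIDIA driver surfaced as GL_OUT_OF_MEMORY:
print("4 MB: ", "locked" if try_lock(4 << 20) else "refused")
```

The actual fix was just what the next paragraph says: raising the memlock limit in limits.conf for the user running Nageru.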
Once I fixed this (by simply increasing the amount of lockable memory in
limits.conf), everything was rock-stable, just like it should be, and I could
turn my attention to the actual production. Often during compos, I don't
really need the mixing power of Nageru (it just shows a single input, albeit
scaled using high-quality Lanczos3 scaling on the GPU to get it down from
1080p60 to 720p60), but since entries come in using different sound levels
(I wanted the stream to conform to EBU R128, which it generally did)
and different platforms expect different audio work (e.g., you wouldn't put a
compressor on an MP3 track that was already mastered, but we did that on e.g.
SID tracks since they have nearly zero ability to control the overall volume),
there was a fair bit of manual audio tweaking during some of the compos.
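For flavor, here's roughly the arithmetic involved in bringing an entry toward a uniform level. This uses plain RMS, while real EBU R128 measurement adds K-weighting and gating, so treat it purely as an illustration, not as what Nageru does internally:

```python
import math

def rms_dbfs(samples):
    # Plain RMS level in dBFS. A real EBU R128 meter K-weights the signal
    # and gates out quiet passages, so this is only a rough stand-in.
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_square)

# One second of a full-scale 440 Hz sine at 48 kHz sits at -3.01 dBFS RMS.
sine = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
level = rms_dbfs(sine)
gain_db = -23.0 - level   # gain needed to reach the R128 target of -23 LUFS
print(round(level, 2), round(gain_db, 2))  # -> -3.01 -19.99
```

The point is just that every entry comes in at a different level, so some gain (and, for platforms like the SID, compression) has to be applied per entry to land near the target.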
That, and of course, the live 50/60 Hz switches were a lot of fun. If an
Amiga entry was coming up, we'd:
1. fade to a camera,
2. fade in an overlay saying we were switching to 50 Hz, so have patience,
3. set the camera as master clock (because the bigscreen's clock is about
to go away),
4. change the scaler from 60 Hz to 50 Hz (takes two clicks and a bit of
waiting),
5. change the scaler input in Nageru from 1080p60 to 1080p50, and
6. do steps 3, 2, 1 in reverse.
Next time, I'll try to make that slightly smoother, especially as the lack
of audio during the switch (it comes in on the bigscreen SDI feed) tended
to confuse viewers.
So, well, that was a lot of fun, and it certainly validated that you can
do a pretty complicated real-life stream with Nageru. I have a long list
of small tweaks I want to make, though; nothing beats actual experience
when it comes to improving processes. :-)